
DropSample: A New Training Method to Enhance Deep Convolutional Neural Networks for Large-Scale Unconstrained Handwritten Chinese Character Recognition



Abstract

Inspired by the theory of Leitner's learning box from the field of psychology, we propose DropSample, a new method for training deep convolutional neural networks (DCNNs), and apply it to large-scale online handwritten Chinese character recognition (HCCR). According to the principle of DropSample, each training sample is associated with a quota function that is dynamically adjusted on the basis of the classification confidence given by the DCNN softmax output. After a learning iteration, samples with low confidence will have a higher probability of being selected as training data in the next iteration; in contrast, well-trained and well-recognized samples with very high confidence will have a lower probability of being involved in the next training iteration and can be gradually eliminated. As a result, the learning process becomes more efficient as it progresses. Furthermore, we investigate the use of domain-specific knowledge to enhance the performance of the DCNN by adding a domain knowledge layer before the traditional CNN. By adopting DropSample together with different types of domain-specific knowledge, the accuracy of HCCR can be improved efficiently. Experiments on the CASIA-OLHWDB 1.0, CASIA-OLHWDB 1.1, and ICDAR 2013 online HCCR competition datasets yield outstanding recognition rates of 97.33%, 97.06%, and 97.51%, respectively, all of which are significantly better than the previous best results reported in the literature.
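The abstract describes the mechanism (a per-sample quota driven by softmax confidence) but not its implementation. The Python sketch below illustrates one way such a confidence-driven sampler could work, assuming quota-proportional selection and a simple multiplicative decay; the class name, update rule, and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class DropSampleScheduler:
    """Confidence-driven sample scheduler in the spirit of DropSample.

    Each training sample keeps a quota that shrinks when the network
    classifies it with high confidence, so well-recognized samples are
    gradually dropped from the training pool while hard samples keep
    being revisited. The decay rule here is an illustrative assumption.
    """

    def __init__(self, num_samples, decay=0.5, min_quota=1e-3):
        self.quota = np.ones(num_samples)   # all samples start with equal quotas
        self.decay = decay                  # how fast confident samples fade out
        self.min_quota = min_quota          # floor so no sample vanishes abruptly

    def sample_batch(self, batch_size, rng=np.random):
        # Selection probability is proportional to each sample's current quota,
        # so low-confidence samples are more likely to be picked next iteration.
        p = self.quota / self.quota.sum()
        return rng.choice(len(self.quota), size=batch_size, replace=False, p=p)

    def update(self, indices, confidences):
        """Shrink quotas of confidently classified samples after a forward pass.

        `confidences` holds the softmax probability the DCNN assigned to the
        correct class for each sample in `indices`.
        """
        conf = np.asarray(confidences)
        # High confidence -> strong decay; low confidence -> quota stays high.
        self.quota[indices] *= 1.0 - self.decay * conf
        np.maximum(self.quota, self.min_quota, out=self.quota)
```

In a training loop, one would call `sample_batch` to draw indices, run the DCNN on those samples, read off the softmax probabilities of the true labels, and pass them to `update`. The multiplicative decay makes easy samples fade out over several iterations rather than all at once, matching the gradual elimination the abstract describes.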
